Updated C wrapper wrt. Torch v1.10 #61
Conversation
Force-pushed acb50ff to 6d3200a
What's the status now?
It's been idle for a while, but the status is summarised by these comments:
Force-pushed 6d3200a to 6afc28e
See #54 for status.
Force-pushed 6afc28e to f407437
Copied from https://github.com/LaurentMazare/ocaml-torch/tree/a6499811f40282a071179d4306afbbb6023dcc4a/src/gen/gen.ml Also updated dune-project accordingly.
Force-pushed 6f18889 to beb24f2
Force-pushed 7f5dbfa to aa10911
Factored out scripts for re-use in CI and the dev container.
Force-pushed aa10911 to 37c6056
Perhaps it would make sense to start merging some of the excellent changes here?
Yes! :-) Please give
Edit: I'll try to go over it as well and make a recap of the changes.
This is the current main diff, which covers the hand-written part (
The overall aim was to update for Torch v1.10.2, but also to make it easier to apply a diff of changes for subsequent version updates...
Recap: General

Modified function definitions:

```c
int at_float_vec(double *values, int value_len, int type);
int at_int_vec(int64_t *values, int value_len, int type);
int at_grad_set_enabled(int);
int at_int64_value_at_indexes(double *i, tensor, int *indexes, int indexes_len);
tensor at_load(char *filename);
int ato_adam(optimizer *, double learning_rate,
             double beta1,
             double beta2,
             double weight_decay);
int atm_load(char *, module *);
```

Added function definitions:

```c
int at_is_sparse(int *, tensor);
int at_device(int *, tensor);
int at_stride(tensor, int *);
int at_autocast_clear_cache();
int at_autocast_decrement_nesting(int *);
int at_autocast_increment_nesting(int *);
int at_autocast_is_enabled(int *);
int at_autocast_set_enabled(int *, int b);
int at_to_string(char **, tensor, int line_size);
int at_get_num_threads(int *);
int at_set_num_threads(int n_threads);
int ati_none(ivalue *);
int ati_bool(ivalue *, int);
int ati_string(ivalue *, char *);
int ati_tuple(ivalue *, ivalue *, int);
int ati_generic_list(ivalue *, ivalue *, int);
int ati_generic_dict(ivalue *, ivalue *, int);
int ati_int_list(ivalue *, int64_t *, int);
int ati_double_list(ivalue *, double *, int);
int ati_bool_list(ivalue *, char *, int);
int ati_string_list(ivalue *, char **, int);
int ati_tensor_list(ivalue *, tensor *, int);
int ati_to_string(char **, ivalue);
int ati_to_bool(int *, ivalue);
int ati_length(int *, ivalue);
int ati_to_generic_list(ivalue, ivalue *, int);
int ati_to_generic_dict(ivalue, ivalue *, int);
int ati_to_int_list(ivalue, int64_t *, int);
int ati_to_double_list(ivalue, double *, int);
int ati_to_bool_list(ivalue, char *, int);
int ati_to_tensor_list(ivalue, tensor *, int);
```
@DhairyaLGandhi Do you know if anyone is available for reviewing these changes? I'm at JuliaCon, FYI.
Updated for Torch v1.10.2 and NNlib >= v0.8.
* Replaced Torch_jll v1.4 with TorchCAPI_jll v0.2; TorchCAPI_jll v0.2 is supported by Torch_jll v1.10.2.
* Torch_jll uses extended platform selection (platform augmentation), which requires Julia 1.7+.
* Updated the Julia wrapper using the Julia wrapper generator (using Julia 1.11).
* Updated usages of functions changed in #61 (Updated C wrapper), e.g. at_grad_set_enabled.
Also:
* Changed tests to make CUDA optional
* CI: Buildkite: added testing on CUDA 11.3, and Julia 1.10 and 1.11
* CI: added GitHub Actions workflow for running tests
Updates the C wrapper based on ocaml-torch @ 0.14, matching Torch v1.10 (the current JLL build).
Contributes to #54 - follow-up for #56
Notable included changes:
* torch_api.{cpp, h}
* buildkite steps
* GitHub Actions workflow for building C wrapper

The last two changes could be moved to a separate PR (to reduce the number of changes in this PR).

To-do:
* torch_api.{cpp, h}
* "Changed torch_api.cpp to reduce diff" should be removed before merging: it is meant to reduce the diff when reviewing; it's only a bunch of indentation changes etc. to make the diff smaller.